Learning OT Constraint Rankings Using a Maximum Entropy Model

Authors

  • Sharon Goldwater
  • Mark Johnson
Abstract

A weakness of standard Optimality Theory is its inability to account for grammars with free variation. We describe here the Maximum Entropy model, a general statistical model, and show how it can be applied in a constraint-based linguistic framework to model and learn grammars with free variation, as well as categorical grammars. We report the results of using the MaxEnt model for learning two different grammars: one with variation, and one without. Our results are as good as those of a previous probabilistic version of OT, the Gradual Learning Algorithm (Boersma, 1997), and we argue that our model is more general and mathematically well-motivated.
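The core idea the abstract describes can be sketched in a few lines: in a MaxEnt grammar, each candidate's probability is proportional to the exponential of a weighted sum of its constraint violations, so free variation falls out naturally as a probability distribution over candidates. The tableau and weights below are purely illustrative, not drawn from the paper.

```python
import math

def maxent_probs(violations, weights):
    """Minimal MaxEnt-grammar sketch (illustrative, not the authors' code).

    violations: {candidate: [violation counts, one per constraint]}
    weights:    [non-negative weight per constraint]
    Each candidate gets probability proportional to
    exp(-sum_i w_i * v_i(candidate)).
    """
    scores = {c: math.exp(-sum(w * v for w, v in zip(weights, vs)))
              for c, vs in violations.items()}
    z = sum(scores.values())  # normalizing constant
    return {c: s / z for c, s in scores.items()}

# Hypothetical tableau: two candidates, two constraints.
violations = {"cand_a": [1, 0], "cand_b": [0, 2]}
probs = maxent_probs(violations, [2.0, 1.5])
```

Because the output is a full distribution rather than a single winner, both categorical grammars (one candidate takes nearly all the probability mass) and variable grammars (mass split across candidates) fit the same model.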


Similar Papers

Convergence of Error-driven Ranking Algorithms

According to the OT error-driven ranking model of language acquisition, the learner performs a sequence of slight re-rankings triggered by mistakes on the incoming stream of data, until it converges to a ranking that makes no more mistakes. This learning model is very popular in the OT acquisition literature, in particular because it predicts a sequence of rankings that models gradualness in ch...


Counting Rankings

In this paper, I present a recursive algorithm that calculates the number of rankings that are consistent with a set of data (i.e. optimal candidates) in the framework of Optimality Theory. The ability to compute this quantity, which I call the r-volume, makes possible a simple and effective Bayesian heuristic in learning – all else equal, choose the candidate preferred by the highest number of...
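The r-volume described above can be made concrete with a brute-force version: enumerate every strict ranking of the constraints and count those under which each observed winner beats its rivals. This is a sketch with hypothetical data, not the recursive algorithm the paper presents.

```python
from itertools import permutations

def beats(ranking, winner_viols, loser_viols):
    """Under strict ranking, the highest-ranked constraint on which
    two candidates differ decides between them."""
    for c in ranking:  # constraints from highest- to lowest-ranked
        if winner_viols[c] != loser_viols[c]:
            return winner_viols[c] < loser_viols[c]
    return False  # candidates tie on every constraint

def r_volume(constraints, data):
    """Count rankings consistent with the data (brute force).

    data: list of (winner_violations, [loser_violations, ...]) pairs,
    each a dict mapping constraint name -> violation count.
    """
    return sum(
        all(beats(r, w, l) for w, losers in data for l in losers)
        for r in permutations(constraints)
    )

# Hypothetical datum: the winner and loser differ only on A and B,
# so the winner wins exactly when A outranks B.
data = [({"A": 0, "B": 1, "C": 0}, [{"A": 1, "B": 0, "C": 0}])]
count = r_volume(["A", "B", "C"], data)
```

With three constraints there are 3! = 6 rankings, and A outranks B in half of them, so this example's r-volume is 3; the factorial enumeration is only feasible for small constraint sets, which is what motivates a recursive algorithm.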


The Benefits of Errors: Learning an OT Grammar with a Structured Candidate Set

We compare three recent proposals adding a topology to OT: McCarthy’s Persistent OT, Smolensky’s ICS and Bíró’s SA-OT. To test their learnability, constraint rankings are learnt from SA-OT’s output. The errors in the output, being more than mere noise, follow from the topology. Thus, the learner has to reconstruct her competence, having access only to the teacher’s performance.


Stochastic OT as a model of constraint interaction

Stochastic OT is a set of specific assumptions about the mechanism of interaction between universal linguistic constraints: the choice of optimal candidates is determined by the rankings of constraint weights, but the rankings can be “perturbed” by a normally distributed “evaluation noise” (Boersma and Hayes 2001); the combined weight of a set of cooperating constraints is thus assumed to equal...
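The evaluation mechanism described above can be sketched directly: each constraint carries a ranking value on a continuous scale, Gaussian "evaluation noise" is added at evaluation time, and the perturbed values fix a strict ranking for that evaluation. The constraint names and values here are assumptions for illustration.

```python
import random

def sample_ranking(ranking_values, noise_sd=2.0, rng=random):
    """One Stochastic-OT evaluation (sketch, following the setup in
    Boersma and Hayes 2001): perturb each constraint's ranking value
    with Gaussian noise, then rank by the perturbed values."""
    perturbed = {c: v + rng.gauss(0, noise_sd)
                 for c, v in ranking_values.items()}
    # Higher perturbed value = higher-ranked constraint this evaluation.
    return sorted(perturbed, key=perturbed.get, reverse=True)

# Hypothetical grammar: close ranking values yield occasional reversals,
# which is how the model produces free variation.
values = {"Faith": 100.0, "Markedness": 96.0}
ranking = sample_ranking(values)
```

When two constraints' ranking values are far apart relative to the noise, reversals are vanishingly rare and the grammar behaves categorically; when they are close, both rankings occur with substantial probability.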


Invited talk: Counting Rankings

In this talk, I present a recursive algorithm to calculate the number of rankings that are consistent with a set of data (optimal candidates) in the framework of Optimality Theory (OT; Prince and Smolensky 1993). Computing this quantity, which I call r-volume, makes possible a simple and effective Bayesian heuristic in learning – all else equal, choose candidates that are preferred by the high...




Publication date: 2003